197 research outputs found

    Adaptive CAC for SVC Video Traffic in IEEE 802.16 Networks

    Call Admission Control (CAC) is a key function that guarantees Quality of Service (QoS) for users. In radio networks, this function is usually based on traffic models and ensures that sessions are admitted only if the estimated available bandwidth suffices for the entire call duration. For video over IEEE 802.16, the CAC function must ensure that the bandwidth to be reserved is compatible with resource availability. For enhanced SVC (Scalable Video Coding) systems, the CAC function must take into account all the layers and their characteristics. In this paper, we propose an enhanced CAC function for SVC that adapts admission decisions to the statistical behaviour of the video sessions. The main goal is to use measurements at the IEEE 802.16 base station (BS) to update the traffic model of SVC video flows, separately for each SVC layer. We then use the variability of the generated traffic to adapt the CAC to the characteristics of incoming flows. To do so, we use a Markovian model fitted to each flow instead of the generic static model used in most papers. A performance evaluation is given to illustrate the interest of our proposal.
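A minimal sketch of the measurement-driven admission idea described above, under our own assumptions: each SVC layer is modelled as a two-state (on/off) source fitted from base-station rate samples, and a new flow is admitted only if the aggregate effective bandwidth fits the capacity. The function names, the on/off model shape, and the safety factor are illustrative, not the paper's.

```python
# Hypothetical per-flow traffic model: fit an on/off source from
# measured rate samples, then run a simple admission test.

def fit_on_off(samples, threshold):
    """Estimate on-state probability and mean on-rate from rate samples."""
    on = [s for s in samples if s > threshold]
    p_on = len(on) / len(samples)
    peak = sum(on) / len(on) if on else 0.0
    return p_on, peak

def effective_bandwidth(p_on, peak, safety=1.1):
    """Crude effective bandwidth: mean rate inflated by a safety factor."""
    return safety * p_on * peak

def admit(flows, new_flow, capacity):
    """Admit new_flow (per-layer (p_on, peak) tuples) if capacity allows.

    flows: already-admitted flows, each a list of per-layer tuples."""
    load = sum(effective_bandwidth(p, r) for f in flows for (p, r) in f)
    demand = sum(effective_bandwidth(p, r) for (p, r) in new_flow)
    return load + demand <= capacity
```

Refitting `fit_on_off` periodically from BS measurements is what makes the model per-flow and adaptive rather than generic and static.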

    The "Object-as-a-Service" paradigm

    The increasing interest in the Internet of Things (IoT) is almost as remarkable as its practical absence from our everyday lives. Announced as the next breakthrough in the IT industry, the domain is characterized by a large number of proposed architectures whose role is to provide a structure for building applications. These architectures are needed because of the heterogeneity of the stakeholders involved in IoT applications: programming languages, operating systems, hardware specificities, processing power, memory, network organization, characteristics, and constraints all vary widely. Furthermore, these architectures should provide easy access for users who are not familiar with the IT technologies involved. Service-Oriented Computing (SOC) has demonstrated its relevance for decoupling interoperability constraints among stakeholders: the composition of loosely coupled services facilitates the integration of very varied elements and provides agility in the creation of new applications. But unlike the approach inherited from SOC, in which pre-existing services are composed to obtain a specific application, we propose a more dynamic notion of service. Our "Object-as-a-Service" point of view is based on dynamically building the service needed on each object and then integrating it into the overall composition. This paper focuses on the benefits of this approach for the IoT, promoting the "Object-as-a-Service" paradigm as a basis for the creation of dynamic and agile user-made applications.
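One way to picture the "Object-as-a-Service" idea is the following toy sketch, entirely our own: instead of composing fixed pre-existing services, each object assembles, on request, exactly the service the application needs from its raw capabilities, and that freshly built service is then plugged into the composition. All class and function names are illustrative.

```python
# Illustrative sketch: objects build their services dynamically,
# then the built services are composed into an application.

class SmartObject:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities  # capability name -> callable

    def build_service(self, needed):
        """Dynamically assemble a service exposing only `needed` operations."""
        missing = [op for op in needed if op not in self.capabilities]
        if missing:
            raise ValueError(f"{self.name} cannot provide {missing}")
        return {op: self.capabilities[op] for op in needed}

def compose(services):
    """Naive composition: invoke every operation of every built service."""
    return {op: fn() for svc in services for op, fn in svc.items()}

sensor = SmartObject("thermo", {"read_temp": lambda: 21.5})
lamp = SmartObject("lamp", {"toggle": lambda: "on"})
app = compose([sensor.build_service(["read_temp"]),
               lamp.build_service(["toggle"])])
```

The point of the sketch is the order of operations: the service is created per application need, on the object, rather than selected from a fixed catalogue.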

    A Content-based Centrality Metric for Collaborative Caching in Information-Centric Fogs

    Information-Centric Fog Computing enables a multitude of nodes near the end-users to provide storage, communication, and computing, rather than relying on the cloud. In a fog network, nodes connect with each other directly to get content locally whenever possible. As the topology of the network directly influences the nodes' connectivity, there has been some work on computing the graph centrality of each node within that network topology. The centrality is then used to distinguish nodes in the fog network, or to prioritize some nodes over others to participate in the caching fog. We argue that, for an Information-Centric Fog Computing approach, graph centrality is not an appropriate metric: a node with low connectivity that caches a lot of content may play a very valuable role in the network. To capture this, we introduce a content-based centrality (CBC) metric which takes into account how well a node is connected to the content the network is delivering, rather than to the other nodes in the network. To illustrate the validity of content-based centrality, we use this new metric in a collaborative caching algorithm. We compare the performance of the proposed collaborative caching with typical centrality-based, non-centrality-based, and non-collaborative caching mechanisms. Our simulation implements CBC on three instances of a large-scale realistic network topology comprising 2,896 nodes, with three content replication levels. Results show that CBC outperforms the benchmark caching schemes, yielding a roughly 3x improvement in the average cache hit rate.
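A simplified reading of content-based centrality, for illustration only: score each node by how close it is to the content cached in the network rather than to other nodes, e.g. CBC(v) = Σ_items 1/(1 + hops to nearest replica). The exact formula is our guess; the paper may define CBC differently.

```python
# Sketch of a content-based centrality over an adjacency-dict graph.
from collections import deque

def hop_distances(graph, src):
    """BFS hop counts from src; graph: node -> list of neighbours."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def content_based_centrality(graph, replicas):
    """replicas: item -> set of nodes caching that item."""
    cbc = {v: 0.0 for v in graph}
    for v in graph:
        d = hop_distances(graph, v)
        for item, holders in replicas.items():
            nearest = min(d.get(h, float("inf")) for h in holders)
            cbc[v] += 1.0 / (1.0 + nearest)  # unreachable replica adds 0
    return cbc
```

Note how a leaf node holding a replica scores highly here even though its graph (degree or betweenness) centrality would be low, which is exactly the case the abstract argues for.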

    Offloading Content with Self-organizing Mobile Fogs

    Mobile users in an urban environment access content on the Internet from different locations. It is challenging for current service providers to cope with the increasing content demand from a large number of collocated mobile users. In-network caching, offloading content to nodes closer to users, alleviates the issue, though efficient cache management is required to decide who should cache what, when, and where in an urban environment, given the nodes' limited computing, communication, and caching resources. To address this, we first define a novel relation between content popularity and availability in the network and investigate a node's eligibility to cache content based on its urban reachability. We then allow nodes to self-organize into mobile fogs to increase the distributed cache and maximize content availability in a cost-effective manner. To cater to rational nodes, we propose a coalition game in which nodes offer a maximum "virtual cache", assuming a monetary reward is paid to them by the service/content provider. Nodes are allowed to merge into different spatio-temporal coalitions in order to increase the distributed cache size at the network edge. Results obtained through simulations using a realistic urban mobility trace validate the performance of our caching system, showing a cache hit ratio of 60-85%, compared to 30-40% for existing schemes and 10% in the case of no coalition.

    Radio Resource Sharing for MTC in LTE-A: An Interference-Aware Bipartite Graph Approach

    Traditional cellular networks have been considered the most promising candidates to support machine-to-machine (M2M) communication, mainly due to their ubiquitous coverage. As these networks were optimally designed to support human-to-human (H2H) communication, an innovative access to radio resources is required to accommodate unique M2M features such as the massive number of machine-type devices (MTDs) and the limited data transmission sessions. In this paper, we consider simultaneous access to the spectrum in an M2M/H2H coexistence scenario. Taking advantage of the new device-to-device (D2D) communication paradigm enabled in Long Term Evolution-Advanced (LTE-A), we propose to combine M2M and D2D, exploiting the low MTD transmit power to enable efficient resource sharing. First, we formulate the resource sharing problem as a maximization of the sum-rate, a problem whose optimal solution has been proved to be non-deterministic polynomial-time hard (NP-hard). We next model the problem as a novel interference-aware bipartite graph to overcome the computational complexity of the optimal solution. To solve it, we consider a two-phase resource allocation approach. In the first phase, H2H users' resource assignment is performed in a conventional way. In the second phase, we introduce two alternative algorithms, one centralized and one semi-distributed, to perform M2M resource allocation; both have polynomial computational complexity. Simulation results show that the semi-distributed M2M resource allocation algorithm achieves good performance in terms of network aggregate sum-rate, with markedly lower communication overhead than the centralized one.
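To make the bipartite-graph framing concrete, here is a deliberately simple greedy stand-in (not the paper's algorithms): edges connect MTD pairs to H2H resource blocks, weighted by the sum-rate gain that reuse would bring, with non-positive weights encoding harmful interference, and a polynomial greedy pass picks a matching.

```python
# Greedy matching on an interference-aware bipartite graph (sketch).

def greedy_allocation(edges):
    """edges: list of (mtd, rb, rate_gain).
    Assign at most one RB per MTD and one MTD per RB, best gains first."""
    used_mtd, used_rb, alloc = set(), set(), {}
    for mtd, rb, gain in sorted(edges, key=lambda e: -e[2]):
        if gain <= 0:          # reuse would hurt the aggregate sum-rate
            break
        if mtd not in used_mtd and rb not in used_rb:
            alloc[mtd] = rb
            used_mtd.add(mtd)
            used_rb.add(rb)
    return alloc
```

Sorting dominates the cost, so the pass is O(E log E), illustrating why graph-based formulations sidestep the NP-hard joint optimization.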

    Performance Evaluation of an Object Management Policy Approach for P2P Networks

    The increasing popularity of network-based multimedia applications poses many challenges for content providers seeking to supply efficient and scalable services. Peer-to-peer (P2P) systems have been shown to be a promising approach to provide large-scale video services over the Internet since, by nature, these systems offer high scalability and robustness. In this paper, we propose and analyze an object management policy approach for video web caches in a P2P context, taking advantage of object metadata, for example video popularity, and object encoding techniques, for example scalable video coding (SVC). We carry out trace-driven simulations to evaluate the performance of our approach and compare it against traditional object management policy approaches. We also study the impact of churn on our approach and on other object management policies that implement different caching strategies. A YouTube video collection recording the logs of over 1.6 million videos was used in our experimental studies. The experimental results show that our proposed approach can improve cache performance substantially. Moreover, we have found that neither simply enlarging the peers' storage capacity nor a zero-replication strategy is an effective way to improve the performance of an object management policy.
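A metadata-aware eviction rule in the spirit described above might look like the following sketch, with scoring and field names of our own choosing: evict the least popular objects first, and among equally popular ones drop SVC enhancement layers before base layers.

```python
# Sketch: popularity- and SVC-layer-aware cache eviction.

def evict_order(cache):
    """cache: list of dicts with 'id', 'popularity', 'layer' (0 = base).
    Returns ids in eviction order: unpopular enhancement layers first."""
    return [o["id"] for o in
            sorted(cache, key=lambda o: (o["popularity"], -o["layer"]))]

def make_room(cache, needed_mb):
    """Evict objects in score order until `needed_mb` is freed."""
    freed, victims = 0.0, []
    for o in sorted(cache, key=lambda o: (o["popularity"], -o["layer"])):
        if freed >= needed_mb:
            break
        victims.append(o["id"])
        freed += o["size_mb"]
    return victims
```

Evicting enhancement layers first degrades quality gracefully for unpopular videos instead of removing them outright, which is what makes SVC metadata useful to a cache policy.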

    Hierarchical QOS-aware Routing in Multi-tier Multimedia Wireless Sensor Networks

    Wireless Multimedia Sensor Networks (WMSN) are a particular case of Wireless Sensor Networks (WSN): they present lower density and limited mobility, require more substantial resources, and need QoS control to transport multimedia streams. In this paper, starting from a reference WMSN architecture, we propose a first approach for hierarchical self-organizing routing that ensures a certain level of QoS.

    Modeling and Performance Evaluation of Advanced Diffusion with Classified Data in Vehicular Sensor Networks

    In this paper, we propose a new distributed protocol called ADCD to manage information harvesting and distribution in Vehicular Sensor Networks (VSN). ADCD aims at reducing the generated overhead, avoiding network congestion as well as long latency in delivering the harvested information. The concept of ADCD is based on the characterization of sensed information (i.e. by its importance, location, and time of collection) and the diffusion of this information accordingly. Furthermore, ADCD uses an adaptive broadcasting strategy to avoid overwhelming users with messages in which they have no interest. We also propose a new probabilistic model for ADCD based on a Markov chain, which aims to optimally tune ADCD's parameters, such as the optimal number of broadcaster nodes. Analytical and simulation results based on different metrics, such as overhead, delivery ratio, probability of a complete transmission, and minimal number of hops, are presented. These results show that ADCD mitigates information redundancy and delivers information with adequate latency, while better adapting the data drivers receive to their location. Moreover, the ADCD protocol reduces overhead by 90% compared to classical broadcast and to an adapted version of MobEyes, and the ADCD overhead remains stable regardless of vehicular density.
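The adaptive-broadcast ingredient can be sketched with a common density-based heuristic (not ADCD's exact Markov model): each vehicle rebroadcasts with a probability that falls as its neighbourhood grows, so the expected number of broadcasters stays near a target. The target value and formula are our assumptions.

```python
# Sketch: density-adaptive rebroadcast probability for a VSN.

def rebroadcast_probability(neighbours, target_broadcasters=3):
    """Aim for ~target_broadcasters expected rebroadcasts per neighbourhood."""
    if neighbours <= target_broadcasters:
        return 1.0
    return target_broadcasters / neighbours

def expected_broadcasters(neighbours, target_broadcasters=3):
    """Expected rebroadcasts if every neighbour applies the rule."""
    return neighbours * rebroadcast_probability(neighbours, target_broadcasters)
```

Keeping the expected broadcaster count flat as density grows is what makes the overhead stable regardless of vehicular density, the property the abstract reports for ADCD.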

    A Study On Monitoring Overhead Impact on Wireless Mesh Networks

    A wireless mesh network is characterized by its dynamicity. It needs to be monitored permanently to make sure its properties remain within certain limits, in order to provide Quality of Service to the end users and to identify possible faults. Establishing, at every moment, the appropriate reporting interval for the measured information, and the way it is disseminated, are important tasks: information must arrive quickly enough to resolve any issue, yet not be so excessive as to affect the data traffic. The problem that arises is that the monitoring information needs to travel in the network along with the user traffic, potentially causing congestion. Considering that a wireless mesh network has highly dynamic characteristics, a good understanding of the influence of disseminating monitoring information alongside user traffic is needed. In this paper we provide an evaluation of network performance while monitoring information is collected from the network nodes. We study how different monitoring packet sizes and reporting frequencies can impact the user traffic, and compare these values to the case in which only user data travels across the network.
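The trade-off the study examines can be framed with a back-of-envelope helper, using illustrative numbers of our own rather than the paper's measurements: how much link capacity do periodic reports consume for a given report size, reporting interval, and node count?

```python
# Back-of-envelope monitoring overhead as a fraction of link capacity.

def monitoring_overhead(report_bytes, interval_s, n_nodes, capacity_bps):
    """Fraction of capacity used by periodic monitoring reports."""
    bits_per_s = n_nodes * report_bytes * 8 / interval_s
    return bits_per_s / capacity_bps
```

For example, 10 nodes each sending a 1000-byte report every second over a 1 Mbit/s link already claim 8% of the capacity; halving the reporting frequency halves that share, which is exactly the knob the paper varies.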

    Cross-layer Loss Differentiation Algorithm to Improve TCP Performances in WLANs

    Loss Differentiation Algorithms (LDAs) are currently used to determine the cause of packet losses, with the aim of improving TCP performance over wireless networks. In this work, we propose a cross-layer solution based on two LDAs that classifies the origin of a loss on an 802.11 link and reacts accordingly. The first LDA scheme, acting at the MAC layer, differentiates losses due to signal failure, caused by displacement or by noise, from other loss types; in case of signal failure, it adapts the behavior of the MAC layer to avoid a costly end-to-end TCP resolution. The second LDA scheme, acting at the TCP layer, distinguishes losses due to interference from those due to congestion and adapts the TCP behavior accordingly. The efficiency of each LDA scheme and of the whole cross-layer solution is then demonstrated through simulations.
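As a rough illustration of the two-level classification (thresholds and signals are our assumptions, not the paper's): at the MAC layer, exhausted retries combined with a collapsing SNR suggest signal failure from displacement or noise; at the TCP layer, a loss with a stable RTT suggests interference while a loss with an inflated RTT suggests congestion.

```python
# Sketch of two loss-differentiation heuristics, one per layer.

def mac_classify(snr_db, retries, snr_floor=5.0, max_retries=4):
    """MAC-layer view: repeated failures at low SNR look like signal loss."""
    if retries >= max_retries and snr_db < snr_floor:
        return "signal_failure"   # handle locally, spare TCP a timeout
    return "other"

def tcp_classify(rtt_ratio):
    """TCP-layer view: rtt_ratio = RTT at loss / baseline RTT.
    Queue build-up inflates RTT; interference does not."""
    return "congestion" if rtt_ratio > 1.5 else "interference"
```

The cross-layer benefit is that only losses the MAC layer labels "other" need to reach the TCP-layer classifier, so each layer reacts at the cheapest possible level.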